

A System Approach to Quantization 
and Transmission Error 

By M. M. BUCHNER, JR. 

(Manuscript received October 10, 1968) 

In a system designed to quantize the output of an analog data source and 
to transmit this information over a digital channel, errors are introduced by 
the quantization and transmission processes. Quantization resolution can be 
improved by using all positions available in a data stream to carry informa- 
tion, or transmission accuracy can be improved if some of the positions are 
used for redundancy with error-correcting codes. The problem is to deter- 
mine, from a system viewpoint, the proper allocation of the available posi- 
tions in order to reduce the average system error rather than concentrate 
exclusively on either the quantization problem or the transmission problem. 

We develop a criterion for the performance of data transmission systems 
based upon the numerical error that occurs between the analog source and 
the destination. The criterion, termed the average system error, is used to 
evaluate and compare possible system configurations. Significant-bit packed 
codes are defined. These codes are useful because their protection can be 
matched to the numerical significance of the data and their redundancy can be 
sufficiently small to maintain good quantization resolution. The average sys- 
tem error resulting from representative system designs is numerically 
evaluated and compared. 

I. INTRODUCTION 

When designing a system to sample the output of an analog data 
source and to transmit the samples over a digital channel, the usual 
approach is to consider the errors introduced by quantization and 
transmission as separate problems. However, from a system view- 
point, a conflict arises. On the one hand, the quantization resolution 
can be improved by using all of the available positions in a data 
stream to carry information. Alternatively, the transmission accuracy 
can be improved if redundancy and error-correcting codes are intro- 
duced by converting some of the information positions into parity 
check positions. The problem then is to determine the proper alloca- 
tion of the available symbols in order to reduce the average system 
error rather than concentrate exclusively on either the quantization 
problem or the transmission problem. 

We consider a data transmission system with uniform quantization. 
The average absolute error that occurs between the analog source 
and the destination is used as the criterion of system performance. 
The criterion, termed the average system error (ASE), is used to evalu- 
ate and compare the effectiveness of various systems. 

Some work has been done on the design of error-correcting codes 
which provide different amounts of protection for different positions 
within a code word. In Ref. 1, the general algebraic properties of these 
codes, referred to as unequal error protection codes, were investigated. 
In Ref. 2, significant-bit codes (which turn out to be a subclass of un- 
equal error protection codes) and a criterion for evaluating the per- 
formance of codes for the transmission of numerical data were devel- 
oped. 

In this paper, we define packed codes and significant-bit packed 
codes, we analyze their performance, and we numerically evaluate the 
average system error resulting from the use of representative quantiza- 
tion resolutions and coding schemes. 

II. PRELIMINARIES 

We consider a binary symmetric channel in which the errors are 
independent of the symbols actually transmitted. In the numerical 
examples, we further assume that the errors occur independently 
with probability p = 1 - q. The error-correcting codes to be discussed 
are binary block codes in which the code vectors form a group under 
component by component modulo 2 addition. Let n denote the block 
length and k denote the number of information positions per code 
vector. The notation (n, k) is used to denote such a code. A complete 
discussion of these codes is contained in Ref. 3. 

The encoder receives k binary information symbols [called a message and denoted by (v_k, v_{k-1}, ... , v_1)] as an input and determines from the message (n - k) binary parity check symbols. The 
decoder operates upon the blocks of n binary symbols coming from the 
channel in an attempt to correct transmission errors and provides k 
binary symbols at its output. 

Let H denote the parity check matrix for such a code. An n-tuple u 
is a code vector if and only if

u H^T = 0   (1)

where H^T is the transpose of H. The matrix H can be written in the form

H = [C_k, C_{k-1}, ... , C_1, I_{n-k}]

where C_i (1 ≤ i ≤ k) is the column of H in the position corresponding to information position v_i in a code vector and I_{n-k} is the (n - k) × (n - k) identity matrix.

When the integer s is to be sent, the message used is B_k(s) such that*

B_k(s) = (v_k, v_{k-1}, ... , v_1)

where

s = Σ_{i=1}^{k} v_i 2^{i-1}.

The parity check symbols E(s) are chosen so that the code vector C(s) = B_k(s) | E(s) satisfies equation (1) where the symbol | indicates that C(s) can be partitioned into B_k(s) and E(s).

* B_i(j) denotes the i-bit binary representation of the integer j where 0 ≤ j ≤ 2^i - 1.
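To make the encoding rule concrete, the following minimal sketch (in Python, not from the original paper) builds the code vector B_k(s) | E(s) for an assumed (7, 4) Hamming parity check matrix written in the form [C_k, ..., C_1, I_{n-k}] and verifies that u H^T = 0; the matrix and the function names are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): encoding with a parity check
# matrix of the form H = [C_k, ..., C_1, I_{n-k}] and testing u H^T = 0.
# The (7, 4) Hamming matrix below and the function names are assumptions.
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])        # columns C_4 C_3 C_2 C_1, then I_3
n = H.shape[1]
k = n - H.shape[0]                            # n = 7, k = 4

def b_k(s, k):
    """B_k(s): the k-bit binary representation (v_k, ..., v_1) of s."""
    return np.array([(s >> i) & 1 for i in range(k - 1, -1, -1)])

def encode(s):
    """Return the code vector C(s) = B_k(s) | E(s)."""
    msg = b_k(s, k)
    parity = H[:, :k].dot(msg) % 2            # each check is a mod-2 sum of v_i
    return np.concatenate([msg, parity])

u = encode(11)                                # s = 11 -> message (1, 0, 1, 1)
assert not np.any(H.dot(u) % 2)               # u H^T = 0, so u is a code vector
```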



III. PACKED CODES 



A model of the data transmission system is shown in Fig. 1. Let us assume that each quantization step is of equal size and that there are 2^l quantization levels. For many applications, the quantizer uses a relatively small l (perhaps 15 or less). In addition, coding schemes must have low redundancy; otherwise so many information positions are converted into check positions that the quantization error becomes too large. These requirements lead us to define "packed" codes in the following manner.

Fig. 1 — System model: analog data source, uniform quantizer, source scale to binary converter, encoder, binary channel, decoder, binary to source scale converter, destination.



Consider an (n, k) binary group code in which a samples are packed into each code vector. If each sample consists of l bits, then k = al. Let s_m denote the integer that is transmitted for the mth sample in a code vector, where 0 ≤ s_m ≤ 2^l - 1 and 1 ≤ m ≤ a. Accordingly, the code vector actually transmitted is

C(s) = B_l(s_a) | B_l(s_{a-1}) | ... | B_l(s_1) | E(s)

where

s = Σ_{m=1}^{a} 2^{(m-1)l} s_m.   (2)

A packed code vector is shown schematically in Fig. 2.
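A minimal sketch of the packing map of equation (2), with illustrative function names; equation (4) later in the paper is the same map applied to the decoder output.

```python
# A minimal sketch of equation (2): packing a samples of l bits each into a
# single integer s, and the inverse map.  Function names are illustrative.
def pack(samples, l):
    """samples = [s_1, ..., s_a], each with 0 <= s_m <= 2**l - 1."""
    return sum(s_m << ((m - 1) * l) for m, s_m in enumerate(samples, start=1))

def unpack(s, a, l):
    return [(s >> ((m - 1) * l)) & ((1 << l) - 1) for m in range(1, a + 1)]

a, l = 2, 2                    # the first example below: a = 2, l = 2
assert unpack(pack([3, 1], l), a, l) == [3, 1]
```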

Two examples are in order. In the first, a (7, 4) perfect single error-correcting code is used to form a packed code with a = 2 and l = 2.

H = [ 1 1 1 0   1 0 0
      1 1 0 1   0 1 0
      1 0 1 1   0 0 1 ]

The first two columns are the s_2 positions and the next two columns are the s_1 positions.

In the second example, the idea behind significant-bit codes is applied to packed codes and results in what will be referred to as a significant-bit packed code.^2 Specifically, the basic (7, 4) code can have its protection capabilities arranged to match the numerical significance of the bit positions; that is, to protect the most significant bit of each of four samples (a = 4 and l = 2).



H = [ 1 0  1 0  1 0  0 0   1 0 0
      1 0  1 0  0 0  1 0   0 1 0
      1 0  0 0  1 0  1 0   0 0 1 ]

Each pair of columns corresponds to the s_4, s_3, s_2, and s_1 positions, respectively; within each pair, the nonzero column is the most significant position of that sample. Notice that the significant-bit packed code requires only half as many parity check positions per sample as the packed code.
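The construction can be sketched as follows (an illustrative layout, not the paper's implementation): the base-code columns are assigned to the protected positions and all-zero columns to the unprotected positions, so single errors in protected positions produce a nonzero syndrome while errors in unprotected positions do not.

```python
# Illustrative sketch of the significant-bit packed construction above: base
# code columns go to the protected (most significant) position of each sample,
# all-zero columns to the unprotected position, followed by the identity block.
import numpy as np

base_cols = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1]]   # (7, 4) information columns
zero_col = [0, 0, 0]

cols = []
for c in base_cols:                 # one (MSB, LSB) pair of positions per sample
    cols.extend([c, zero_col])
cols.extend([[1, 0, 0], [0, 1, 0], [0, 0, 1]])              # I_3
H = np.array(cols).T                # 3 x 11 parity check matrix

for pos in range(H.shape[1]):
    e = np.zeros(H.shape[1], dtype=int)
    e[pos] = 1                      # a single transmission error in position pos
    syndrome = H.dot(e) % 2
    print(pos, syndrome, "nonzero syndrome" if syndrome.any() else "undetected")
```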



Fig. 2 — Packed code vector: n bits, partitioned as B_l(s_a) | B_l(s_{a-1}) | ... | B_l(s_1) | E(s), with l bits per sample and n - k parity check bits.

Many packed codes can be designed to provide desired levels of 
protection and redundancy. Numerical data concerning the effective- 
ness of representative packed codes are presented in Sections VI and 
VII. 



IV. FORMULATION OF A CRITERION OF SYSTEM FIDELITY 

In this section, we develop a criterion of system fidelity as a func- 
tion of the number of quantization levels and the capability of the 
error-correcting code. This is done for packed codes because of their 
generality. 

Let x_m denote the output of the analog source that results in s_m being transmitted. It is assumed that x_m is a random variable that is uniformly distributed on the interval (X_1, X_2). The probability density function for x_m is

f(x_m) = 1/(X_2 - X_1)   for X_1 ≤ x_m ≤ X_2
       = 0               for x_m < X_1 or x_m > X_2.   (3)

If

X_1 + s_m (X_2 - X_1)/2^l ≤ x_m < X_1 + (s_m + 1)(X_2 - X_1)/2^l,

then the output of the quantizer is

X_1 + (s_m + 1/2)(X_2 - X_1)/2^l.

The "source scale to binary converter" receives X_1 + (s_m + 1/2)(X_2 - X_1)/2^l from the quantizer and delivers B_l(s_m) to the encoder. After a samples



are received by the encoder, the message

B_k(s) = B_l(s_a) | B_l(s_{a-1}) | ... | B_l(s_1)

is encoded to form the code vector C(s) = B_k(s) | E(s) where the value of s is determined from equation (2). At the destination, the decoder attempts to correct errors and provides the message

B_k(r) = B_l(r_a) | B_l(r_{a-1}) | ... | B_l(r_1)

at its output where 0 ≤ r_m ≤ 2^l - 1 for 1 ≤ m ≤ a and

r = Σ_{m=1}^{a} 2^{(m-1)l} r_m.   (4)



The "binary to source scale converter" receives B t (r m ) and delivers 

to the destination. Because uniform quantization is used, a useful 
measure of the numerical error that occurs as a result of the quantiza- 
tion and transmission of x m is 



x m - [x, + (r„ + fl(*^Sl)] 



where y > 0. The appropriate value of y will depend upon the nature 
and use of the signal. For this paper, let y = 1. 

For the mth sample position in a packed code, let Pr_m{r_m | s_m} denote the probability that r_m is received when s_m is sent. Accordingly, the average system error for the mth sample (ASE_m) is

ASE_m = Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} ∫_{X_1 + s_m(X_2-X_1)/2^l}^{X_1 + (s_m+1)(X_2-X_1)/2^l} | x_m - X_1 - (r_m + 1/2)(X_2 - X_1)/2^l | Pr_m{r_m | s_m} f(x_m) dx_m.   (5)

It is desirable to express Pr_m{r_m | s_m} in terms of the properties of the error-correcting code. Let Pr{r | s} denote the probability that r occurs at the output of the decoder when s is the input to the encoder. As shown in Appendix A, for a channel in which the errors are independent of the symbols actually transmitted,

Pr_m{r_m | s_m} = Σ_{t_a=0}^{2^l-1} ... Σ_{t_1=0}^{2^l-1} Pr{ Σ_{m'=1}^{a} 2^{(m'-1)l} t_{m'} | 0 }   (the sums excluding t_m)

where B_l(t_m) = B_l(r_m) ⊕ B_l(s_m).* This expression is interesting because it permits us to compute Pr_m{r_m | s_m} from the properties of the code. Specifically, it is necessary to determine the probability that each possible sequence of a samples, in which the mth position equals t_m, is received, given that zero is transmitted for each sample, and then to sum these probabilities.

* The symbol ⊕ denotes component by component modulo 2 addition of vectors.

For the case in which one sample is transmitted per code word (that is, a = 1 and l = k) and all samples are equally likely to be transmitted, the average numerical error (ANE) that occurs during transmission has been defined as^2

ANE = (1/2^k) Σ_{r=0}^{2^k-1} Σ_{s=0}^{2^k-1} | r - s | Pr{r | s}.

The average numerical error is the average magnitude by which the output of the decoder differs numerically from the input to the encoder and thus provides a measure of the performance of the channel and the code. This concept can be generalized by defining the average numerical error for the mth sample as

ANE_m = (1/2^l) Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} | r_m - s_m | Pr_m{r_m | s_m}.   (6)

By reasoning analogous to that in Theorem 1 of Ref. 2, for a binary group code used with a binary symmetric channel, equation (6) can be reduced to

ANE_m = Σ_{r_m=1}^{2^l-1} 2^{j(r_m)-1} Pr_m{r_m | 0},

where 2^{j(r_m)-1} denotes the value of the most significant nonzero bit of B_l(r_m).

With this definition of ANE_m, the probability density function in equation (3), and the steps shown in Appendix B, the average system error for the mth sample, as given in equation (5), can be expressed as

ASE_m = ((X_2 - X_1)/2^l) (ANE_m + (1/4) Pr_m{0 | 0}).



One feature of packed codes is that the protection afforded various samples against transmission errors can be unequal. If this occurs, different positions will have different system error. Therefore, in general, the average system error per sample (ASE) is

ASE = (1/a) Σ_{m=1}^{a} ASE_m.

The range of the analog source is specified by X_1 and X_2. When considering system design, it is convenient to let X_2 - X_1 = 1 (or to consider a normalized average system error). Accordingly, in the remainder of this paper, we shall be concerned with the expression in equation (7).

ASE = (1/a) Σ_{m=1}^{a} [ (1/2^l) (ANE_m + (1/4) Pr_m{0 | 0}) ].   (7)

For a system in which one sample is transmitted per code word (that is, a = 1 and l = k),

ASE = (1/2^l) (ANE + (1/4) Pr{0 | 0})   (8)

where ANE and Pr{0 | 0} are for the entire code.

For error-free transmission, Pr_m{0 | 0} = 1 and ANE_m = 0 for all coding schemes including uncoded transmission. In this case, ASE = 2^{-(l+2)}. Thus, the system error is independent of the particular code, is minimized by maximizing l, and cannot be reduced to zero but is bounded by the quantization error.

V. THE AVERAGE SYSTEM ERROR FOR UNCODED TRANSMISSION 

Before examining the role that error-correcting codes can play in reducing the average system error, it is advantageous to consider system effectiveness when uncoded transmission is used with a memoryless channel. In the system model, uncoded transmission is characterized by a = 1 and l = k = n. Let ASE_UC denote the average system error for uncoded transmission. From Theorem 2 and the comment following the proof of the theorem in Ref. 2 (these are summarized in Appendix C), the average numerical error for uncoded transmission is

ANE_UC = p Σ_{i=1}^{l} 2^{i-1} q^{l-i}.

The probability of correct transmission is q^l. Therefore, from equation (8),

ASE_UC = (1/2^l) ( p Σ_{i=1}^{l} 2^{i-1} q^{l-i} + q^l/4 ).   (9)
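As a rough check of this formulation, the following sketch simulates uncoded transmission of a source uniform on (0, 1) and compares the measured average absolute error with equation (9) as reconstructed above; the trial count, parameter values, and function names are illustrative.

```python
# Rough Monte Carlo check of equation (9) as reconstructed above, for
# uncoded transmission (a = 1, l = k = n) of a source uniform on (0, 1).
import random

def simulate_ase_uncoded(l, p, trials=200_000):
    total = 0.0
    for _ in range(trials):
        x = random.random()
        s = min(int(x * 2**l), 2**l - 1)      # quantizer: index of the cell
        r = s
        for i in range(l):                    # binary symmetric channel
            if random.random() < p:
                r ^= 1 << i
        x_hat = (r + 0.5) / 2**l              # binary to source scale converter
        total += abs(x - x_hat)
    return total / trials

def ase_uncoded_formula(l, p):
    q = 1.0 - p
    ane = p * sum(2**(i - 1) * q**(l - i) for i in range(1, l + 1))
    return (ane + q**l / 4.0) / 2**l

l, p = 5, 0.01
print(simulate_ase_uncoded(l, p), ase_uncoded_formula(l, p))
```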



Figures 3 and 4 present the average system error for uncoded transmission for representative values of l and p.

For each value of l, notice that as p → 0, ASE_UC → 2^{-(l+2)}, which is the limitation imposed by the quantization error. Also, ASE_UC increases monotonically with p for 0 < p < 1/2 (see Appendix D). For a given value of l, how large must p become so that ASE_UC deviates appreciably from 2^{-(l+2)} (that is, for what values of p does the transmission error make a significant contribution to the system error)?

For small p, equation (9) yields

ASE_UC ≈ (1/2^l) [ (2^l - 1 - l/4) p + 1/4 ].   (10)

Fig. 3 — Average system error for uncoded transmission (ASE_UC) for various l.



Fig. 4 — Average system error for uncoded transmission (ASE_UC) for various l.
This expression can be broken into two components: the term

((2^l - 1 - l/4)/2^l) p

and the term 2^{-(l+2)}. These components are shown in Fig. 5 for l = 15. In Fig. 5, the terms intersect at a probability of error denoted by p_c where

p_c = 1 / [4 (2^l - 1 - l/4)].

Notice that p_c is the value of p for which the transmission error equals the quantization error [within the approximations leading to equation (10)]. Accordingly, for p = p_c, ASE_UC = 2^{-(l+1)}. In Fig. 6, p_c is given for various l. From p_c, it is possible to obtain an estimate of the general region in which ASE_UC begins to deviate from 2^{-(l+2)} because of transmission errors.
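A short sketch evaluating p_c from the approximate expression above for several values of l (compare Fig. 6).

```python
# Sketch: p_c from the approximation above, for several resolutions.
def p_c(l):
    return 1.0 / (4 * (2**l - 1 - l / 4.0))

for l in (5, 8, 10, 12, 15):
    print(l, p_c(l))
```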




Fig. 5 — Average system error for uncoded transmission (ASE_UC) for l = 15, showing the transmission-error and quantization-error components and their intersection at p_c.

Fig. 6 — p_c for various l.

An additional feature of Figs. 3 and 4 is that, for a given value of l and for p greater than the appropriate p_c, ASE_UC is approximately equal to p. This causes the converging of the curves as p increases and implies that systems with different l will have essentially the same performance. Let us consider qualitatively the cause of this phenomenon.

For p > p_c, the transmission error is significantly greater than the quantization error and, thus, the average system error is largely determined by the transmission error. If a single error occurs in a sample and if it occurs in the most significant position, on the average, a numerical error of 1/2 will result for any l. For values of p that are of practical interest, the probability that this occurs is essentially independent of l and equal to p. Similar reasoning can be applied to the less significant positions although the numerical error that results will, of course, be less than 1/2. The point is that the probability that

these single errors occur and the numerical errors that result are essentially independent of l. This implies that the transmission error (and thus the average system error) will be relatively insensitive to l.

Notice that p_c decreases as l increases. The reason is that the quantization error decreases as l increases whereas the transmission error is approximately independent of l. Thus, the value of p where the transmission error becomes a significant portion of the system error decreases.

From equation (10), no system using uncoded transmission can have an average system error significantly less than p no matter how large l becomes. This leads to the problem of how to make the average system error less than p.

Suppose that the σ most significant positions per sample are protected by coding and that the remaining (l - σ) positions are not protected. Further, assume that sufficient protection is provided so that the probability of error in the protected positions is substantially less than p. Under these conditions, the transmission error is determined primarily by errors in the least significant positions and we can consider the protected positions to be free of errors. Then, from Theorem 2 of Ref. 2 (summarized in Appendix C),

ANE = p Σ_{i=1}^{l-σ} 2^{i-1} q^{l-σ-i}.

For values of p that are of practical interest,

ASE ≈ (1/2^l) [ (2^{l-σ} - 1) p + 1/4 ].   (11)

Accordingly, for p in the range where transmission is the major source of system error, the average system error can be reduced by a factor of approximately 2^{-σ} from the average system error for uncoded transmission. This implies that we should seek codes that can both protect the significant positions of each sample and maintain quantization resolution by requiring small redundancy. The above requirements provide the motivation for significant-bit packed codes.

VI. SOME EXAMPLES OF SYSTEM PERFORMANCE WITH CODING

In this section we assume that a predetermined number of positions (denoted by ξ) are available to transmit each sample. By numerical evaluation, the average system error that results from the use of representative coding schemes (for ξ = 7 and ξ = 15*) is determined for various values of p. The examples illustrate that system performance depends upon p and upon the manner in which the ξ positions are allocated between information bits and redundancy for error control.

Let ASE_UC denote uncoded transmission. First, consider codes in which one code vector is used per sample (a = 1). Listed below is a brief description of each code. The codes are indexed by the notation used for their average system error in Fig. 7 (ξ = 7) and Fig. 8 (ξ = 15).

ASE_(3,1): A (3, 1) perfect single error-correcting code is used to protect the most significant position.

ξ = 7: a = 1, l = 5
ξ = 15: a = 1, l = 13

ASE_(3,1),(3,1): Independent (3, 1) perfect single error-correcting codes are used to protect the two most significant positions.

ξ = 7: a = 1, l = 3
ξ = 15: a = 1, l = 11

* These values were selected because in each case it is possible to construct a perfect single error-correcting code and thus to compare uniform protection with protection that is heavily weighted in favor of the most significant bit per sample.



Fig. 7 — Average system error (ASE) with representative codes; 7 positions per sample (ξ = 7).

ASE_(7,4): A (7, 4) perfect single error-correcting code is used to protect the four most significant positions.

ξ = 7: a = 1, l = 4
ξ = 15: a = 1, l = 12

ASE_(15,11): A (15, 11) perfect single error-correcting code is used to protect all 11 positions.

ξ = 15: a = 1, l = 11

* = 15: 

Although many significant-bit packed codes can be constructed, we 
consider only three examples. They were selected because the codes 
should protect the most significant positions of each sample and 
because a small number of parity check positions per sample should 
be used so that we can reasonably consider 2 l quantization levels. 
The codes illustrate the general capabilities of significant-bit packed 
codes and are easy to implement. One prime is used in the average 
system error notation to indicate that the most significant position 
of each sample is protected and two primes to indicate that the two 
most significant positions of each sample are protected. Let p de- 



TRANSMISSION ERROR 



1233 



note the number of parity check positions per sample where p = 
(n—k)/a. Let R denote the code rate where R = k/n. 

ASE'_(15,11): A (15, 11) perfect single error-correcting code is used in a significant-bit packed code to protect the most significant position of each sample.

ξ = 7: a = 11, l = 7, ρ = 0.36, R = 0.950
ξ = 15: a = 11, l = 15, ρ = 0.36, R = 0.976

ASE'_(31,26): A (31, 26) perfect single error-correcting code is used in a significant-bit packed code to protect the most significant position of each sample.

ξ = 7: a = 26, l = 7, ρ = 0.19, R = 0.973
ξ = 15: a = 26, l = 15, ρ = 0.19, R = 0.987

ASE''_(31,26): A (31, 26) perfect single error-correcting code is used in a significant-bit packed code to protect the two most significant positions of each sample.

ξ = 7: a = 13, l = 7, ρ = 0.38, R = 0.948
ξ = 15: a = 13, l = 15, ρ = 0.38, R = 0.975

Fig. 8 — Average system error (ASE) with representative codes; 15 positions per sample (ξ = 15).
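The listed values of ρ and R follow from the parameters of each code; a small sketch, assuming each code word carries a samples of l bits plus the base code's n_base - k_base parity checks, reproduces them.

```python
# Sketch reproducing rho and R from the listed parameters, assuming each code
# word carries a samples of l bits plus the base code's n_base - k_base checks.
def rho_and_rate(n_base, k_base, a, l):
    checks = n_base - k_base
    return checks / a, (a * l) / (a * l + checks)

print(rho_and_rate(15, 11, a=11, l=7))     # ASE'_(15,11), xi = 7:  (0.36, 0.950)
print(rho_and_rate(31, 26, a=26, l=15))    # ASE'_(31,26), xi = 15: (0.19, 0.987)
print(rho_and_rate(31, 26, a=13, l=7))     # ASE''_(31,26), xi = 7: (0.38, 0.948)
```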

We can make the following observations concerning system performance when codes are used. In all cases, as p → 0, ASE → 2^{-(l+2)}, which is the limitation on system performance because of quantization. As l increases, the quantization error decreases. Thus, the value of p for which the transmission error becomes a significant portion of the system error decreases. In other words, if you design for good quantization resolution, then you need a good channel. This implies that, as the number of positions per sample increases, codes are useful for smaller values of p in order to bring the channel up to the required quality.

Because all a = 1 codes necessitate a sizable reduction in l to allow for redundancy, they are only attractive for larger p where considerable coding capability is required. For these p, we have demonstrated that system performance can be improved (by an appreciable amount in some cases) by sacrificing quantization resolution for an improvement in transmission fidelity. However, because significant-bit packed codes provide protection for the most significant positions without the large penalty in quantization resolution required by the a = 1 codes, significant-bit packed codes are effective for considerably smaller values of p than are the a = 1 codes.

Notice that ASE'_(31,26) and ASE'_(15,11) are nearly equal. The reason is that although the significant-bit packed code using the (31, 26) code provides less error protection than the significant-bit packed code based on the (15, 11) code, in each case the protection provided for the most significant position is "sufficient" and, thus, the errors that hurt are coming in the less significant positions.

On the other hand, ASE''_(31,26) is less than either ASE'_(31,26) or ASE'_(15,11) for the values of p where significant-bit packed codes are preferable. The reason is that errors are now nearly eliminated in the two most significant positions in each sample. Further reductions in system error could be achieved by using significant-bit packed codes which protect three or more positions per sample. However, we must be careful not to go too far, or the redundancy will begin to be charged against quantization resolution.

Significant-bit packed codes achieve an effect similar to interleaving. Thus, although the computations herein have been for independent errors, significant-bit packed codes could prove useful for a channel with clustered errors.

VII. SIGNIFICANT-BIT PACKED CODES FOR DIFFERENT l

Several interesting points are illustrated in Fig. 9. Indexed on the left are the four values of l considered. For l = 15, ASE_UC is shown. For l = 15, 14, 13, and 12, ASE'_(31,26) and ASE''_(31,26) are given.

The following observations concerning Fig. 9 can be made. For small p, the l = 15 schemes are best. This is to be expected because quantization is the major source of system error for small p.

However, for larger p, the significant-bit packed codes with l < 15 have less system error than uncoded transmission for l = 15. This is particularly interesting because, in these significant-bit packed codes, more positions are saved by reducing l than are added by the parity check positions. For example, in the l = 13 system that results in ASE'_(31,26), a = 26 and n = 343. If uncoded transmission with l = 15 is




1<r 3 



Fig. 9 — Average system error (ase) with significant-bit packed codes. 



1236 THE BELL SYSTEM TECHNICAL JOURNAL, MAY-JUNE 1969 

used to send these 26 samples, 390 positions are required. Thus, for 
p > 4.5 -10 -8 , this significant-bit packed code reduces system error 
and saves 47 positions every 26 samples. Similar behavior can be noted 
for other significant-bit packed codes considered in Fig. 9. 
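A sketch of the bookkeeping behind this example, using the approximate expressions (10) and (11) as reconstructed here rather than the exact curves of Fig. 9; the crossover it finds is of the same order as the value quoted above.

```python
# Sketch of the position count and the crossover probability for the l = 13
# example, using the approximations (10) and (11) as reconstructed here.
a, checks = 26, 5
print(a * 13 + checks)                      # 343 positions for the coded system
print(a * 15)                               # 390 positions for uncoded l = 15
print(a * 15 - (a * 13 + checks))           # 47 positions saved per 26 samples

def ase_uncoded(l, p):                      # equation (10)
    return ((2**l - 1 - l / 4.0) * p + 0.25) / 2**l

def ase_sbp(l, sigma, p):                   # equation (11)
    return ((2**(l - sigma) - 1) * p + 0.25) / 2**l

p = 1e-7
while ase_sbp(13, 1, p) > ase_uncoded(15, p):
    p *= 1.02                               # coarse search for the crossover
print(p)                                    # roughly 4.5e-5
```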

For p = 10^{-3}, the three systems with σ = 1 converge to approximately 2^{-1} ASE_UC and the three systems with σ = 2 converge to approximately 2^{-2} ASE_UC even though the systems use different quantization resolutions. However, for p = 10^{-6}, the convergence is determined by l. This clearly demonstrates the two extreme cases in system behavior: limitation by transmission error and limitation by quantization error.

VIII. THE SYNTHESIS PROBLEM — AN EXAMPLE 

Suppose that the probability of error and the maximum allowable average system error are specified. Let these be denoted by p_s and ASE_s respectively. From equation (11), σ and l should be chosen to satisfy the relation

ASE_s ≥ 2^{-σ} p_s + 2^{-(l+2)}   (12)

where σ represents the number of protected positions per sample. Because equation (11) is an approximation, values of l and σ that satisfy equation (12) cannot be guaranteed to provide a system with an ASE ≤ ASE_s. However, as σ decreases compared with l, equation (12) becomes increasingly reliable.*

Notice that l and σ appear as negative exponents in equation (12). Therefore, for a given p_s, a wide range of values for ASE_s can be achieved by varying l and σ. Also, equation (12) frequently can be satisfied by several pairs of values for l and σ. For each pair, there may be several possible coding schemes. The system designer must then choose the final system configuration from these candidates on the basis of such items as the cost of implementation or the number of positions in the data stream per sample.

* A major assumption leading to equation (11) is that all of the average numerical error comes from the unprotected positions. However, if σ is large, then errors in the protected positions result in a much larger numerical error than errors in the unprotected positions. Therefore, even though errors in the protected positions are less likely, a significant portion of the average numerical error can come from these positions.

As an example of system design, consider a telemetry channel in planetary space missions. This channel can often be modeled satisfactorily by the memoryless binary symmetric channel and typically has a bit error rate of 5·10^{-3}. Thus, equation (12) becomes

ASE_s ≥ 5·10^{-3} · 2^{-σ} + 2^{-(l+2)}.   (13)

If uncoded transmission is a system requirement, then σ = 0 and

ASE_s ≥ 5·10^{-3} + 2^{-(l+2)}.

Notice that successive increases in l result in successively smaller reductions in the average system error and that the average system error can never be less than 5·10^{-3}. From Fig. 4, all systems with l ≥ 8 have essentially the same average system error and, thus, little is gained by using l > 8.

A more interesting situation exists if the system designer is permitted to choose l and the coding scheme. If ASE_s > 5·10^{-3}, it is possible to design a system using uncoded transmission although coding could prove effective as ASE_s approaches 5·10^{-3}. However, if ASE_s < 5·10^{-3}, some form of coding is mandatory. Conversely, from equation (13), if coding is used, the system error can be made small by choosing appropriate values of l and σ. In Table I, the approximate average system error is given for representative l and σ. The information in Table I was computed by using equation (11) and, thus, is subject to the assumptions and approximations leading to equation (11). However, from Table I, the improvements in system performance that can be achieved by coding are evident.

Table I — Approximate Average System Error (ASE) for Representative l and σ; p = 5·10^{-3}

  l    σ    Approximate ASE
  7    1    4.4·10^{-3}
  7    2    3.2·10^{-3}
  7    3    2.5·10^{-3}
  8    1    3.5·10^{-3}
  8    2    2.2·10^{-3}
  8    3    1.6·10^{-3}
  9    1    3.0·10^{-3}
  9    2    1.7·10^{-3}
  9    3    1.1·10^{-3}
 10    1    2.7·10^{-3}
 10    2    1.5·10^{-3}
 10    3    8.7·10^{-4}
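A sketch that recomputes entries like those of Table I from expression (11) as reconstructed here, with p = 5·10^{-3}; the values agree with the table to within a unit in the last digit.

```python
# Sketch recomputing Table I from expression (11) as reconstructed here.
def ase_approx(l, sigma, p=5e-3):
    return ((2**(l - sigma) - 1) * p + 0.25) / 2**l

for l in (7, 8, 9, 10):
    for sigma in (1, 2, 3):
        print(l, sigma, f"{ase_approx(l, sigma):.1e}")
```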



Consider the following specific example which illustrates certain alternatives in code selection without requiring extensive computational effort. Suppose ASE_s = 4·10^{-3}. From equation (13) or Table I, we can use σ = 1 and l ≥ 8 or σ = 2 and l ≥ 7. The minimum values of l will be used. Several coding schemes are possible in each case. The codes, indexed below by the notation used for their average system error in Fig. 10, follow the ideas in Section VI. Thus, the parity check matrices are not presented.

For σ = 1, l = 8:

ASE_(3,1): A (3, 1) perfect single error-correcting code is used to protect the most significant position.

a = 1, l = 8

ASE'_(15,11): A (15, 11) perfect single error-correcting code is used in a significant-bit packed code to protect the most significant position of each sample.

a = 11, l = 8

ASE'_(31,26): A (31, 26) perfect single error-correcting code is used in a significant-bit packed code to protect the most significant position of each sample.

a = 26, l = 8

For σ = 2, l = 7:

ASE_(3,1),(3,1): Independent (3, 1) perfect single error-correcting codes are used to protect the two most significant positions.

a = 1, l = 7

ASE''_(31,26): A (31, 26) perfect single error-correcting code is used in a significant-bit packed code to protect the two most significant positions of each sample.

a = 13, l = 7

The design objective, denoted by an asterisk in Fig. 10, is satisfied by each system although the systems vary somewhat in performance for other p. Notice that the systems differ in the coding equipment and quantization resolution required for implementation. Also, notice that the systems vary in the number of positions per sample in the data stream [from a low of 7.4 for ASE''_(31,26) to a high of 11 for ASE_(3,1),(3,1)]. Which system would actually be selected would thus depend upon the details of the specific application.
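A small sketch of the positions-per-sample comparison quoted above, with the parity check counts taken from the code list in this section.

```python
# Sketch of the positions-per-sample comparison for the ASE_s = 4e-3
# candidates; parity check counts are taken from the code list above.
def positions_per_sample(l, checks, samples_per_word):
    return l + checks / samples_per_word

print(positions_per_sample(7, 4, 1))     # ASE_(3,1),(3,1): 11 (the high)
print(positions_per_sample(8, 2, 1))     # ASE_(3,1): 10
print(positions_per_sample(8, 4, 11))    # ASE'_(15,11): about 8.4
print(positions_per_sample(8, 5, 26))    # ASE'_(31,26): about 8.2
print(positions_per_sample(7, 5, 13))    # ASE''_(31,26): about 7.4 (the low)
```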



Fig. 10 — Systems for space telemetry channel.



IX. CONCLUSIONS 

A general formulation of the error introduced by quantization and 
transmission has been developed for the data transmission system 
shown in Fig. 1. It has been shown that system performance is in- 
fluenced by both the quantization resolution and the channel error 
characteristics, that certain levels of performance cannot be achieved 
without the use of coding no matter how fine the quantization, and 
that performance can, in some cases, be improved by sacrificing 
quantization for redundancy and error control. In general, when 
coding is used, it is beneficial to use codes that match their protection 
to the numerical significance of the information positions. Significant- 
bit packed codes are particularly useful because they provide protec- 
tion for the most significant positions without incurring a large penalty 
in quantization resolution. The problem of determining the coding 
capability and the number of quantization levels required to achieve 
a specified average system error has been considered. 

The specific results are based upon the choice of γ = 1 in Section IV. However, varying γ simply changes the "cost" assigned to the numerical errors and, thus, the general ideas presented here are applicable for any γ > 0: for example, the desirability of the system approach to quantization and transmission error, the possibility of improving system performance by sacrificing quantization resolution for redundancy, and the use of codes that concentrate protection on the numerically most significant positions. Actually, it appears that as γ increases, the desirability of protection for the most significant positions also increases.

Because of the unit distance properties of Gray codes, it is natural to inquire whether Gray codes could prove useful in the system discussed in this paper. It can be shown (for γ = 1) that a Gray code with 2^l levels gives exactly the same average numerical error and average system error as the natural binary numbering with 2^l levels even when error-correcting codes are used.
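The Gray code remark can be checked by brute force for small l; the following sketch (an illustrative enumeration, not from the paper) computes the uncoded average numerical error under the natural binary and Gray labelings and prints them side by side.

```python
# Sketch checking the Gray code remark for uncoded transmission with small l:
# enumerate all error patterns and compare the average numerical error under
# natural binary and Gray labelings of the 2**l quantizer levels.
from itertools import product

def ane(l, p, label, unlabel):
    q = 1.0 - p
    total = 0.0
    for s in range(2**l):
        for pattern in product((0, 1), repeat=l):
            weight = sum(pattern)
            err = int("".join(map(str, pattern)), 2)
            r = unlabel(label(s) ^ err, l)
            total += (p**weight) * (q**(l - weight)) * abs(r - s)
    return total / 2**l

natural = (lambda s: s, lambda c, l: c)
gray = (lambda s: s ^ (s >> 1),
        lambda c, l: [g for g in range(2**l) if (g ^ (g >> 1)) == c][0])

for l in (2, 3, 4):
    print(l, ane(l, 0.01, *natural), ane(l, 0.01, *gray))
```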

APPENDIX A 

Derivation of an Expression for Pr_m{r_m | s_m}

Let Pr_m{r_m | s_m} denote the probability of receiving r_m when s_m is transmitted using a packed code. Let Pr{s_i} (1 ≤ i ≤ a) denote the probability that s_i is transmitted. Then

Pr_m{r_m | s_m} = Σ_{r_a=0}^{2^l-1} ... Σ_{r_1=0}^{2^l-1} Σ_{s_a=0}^{2^l-1} ... Σ_{s_1=0}^{2^l-1} Pr{r | s} Pr{s_a} ... Pr{s_1}

(the sums excluding r_m and s_m, and the product excluding Pr{s_m}) where the values of r and s are determined from equations (4) and (2), respectively. However, Pr{s_i} = 2^{-l} for 1 ≤ i ≤ a, i ≠ m. Thus,

Pr_m{r_m | s_m} = (1/2^{(a-1)l}) Σ_{r_a=0}^{2^l-1} ... Σ_{r_1=0}^{2^l-1} Σ_{s_a=0}^{2^l-1} ... Σ_{s_1=0}^{2^l-1} Pr{r | s}   (14)

(excluding r_m and s_m). The expression in equation (14) can be simplified. From equations (2) and (4), equation (14) can be written as

Pr_m{r_m | s_m} = (1/2^{(a-1)l}) Σ_{s_a=0}^{2^l-1} ... Σ_{s_1=0}^{2^l-1} Σ_{r_a=0}^{2^l-1} ... Σ_{r_1=0}^{2^l-1} Pr{ Σ_{m'=1}^{a} 2^{(m'-1)l} r_{m'} | Σ_{m'=1}^{a} 2^{(m'-1)l} s_{m'} }   (15)

(excluding r_m and s_m). By Lemma 1 of Ref. 2, for a binary group code used with a binary symmetric channel in which the errors are independent of the symbols actually transmitted,

Pr{ Σ_{m'=1}^{a} 2^{(m'-1)l} r_{m'} | Σ_{m'=1}^{a} 2^{(m'-1)l} s_{m'} } = Pr{ Σ_{m'=1}^{a} 2^{(m'-1)l} t_{m'} | 0 }

where B_l(t_{m'}) = B_l(r_{m'}) ⊕ B_l(s_{m'}). By Lemma 2 of Ref. 2, equation (15) can be written as

Pr_m{r_m | s_m} = (1/2^{(a-1)l}) Σ_{s_a=0}^{2^l-1} ... Σ_{s_1=0}^{2^l-1} Σ_{t_a=0}^{2^l-1} ... Σ_{t_1=0}^{2^l-1} Pr{ Σ_{m'=1}^{a} 2^{(m'-1)l} t_{m'} | 0 }

(excluding s_m and t_m), which reduces to

Pr_m{r_m | s_m} = Σ_{t_a=0}^{2^l-1} ... Σ_{t_1=0}^{2^l-1} Pr{ Σ_{m'=1}^{a} 2^{(m'-1)l} t_{m'} | 0 }

(excluding t_m).

APPENDIX B 

Reduction of the Expression for the Average System Error

By substituting equation (3) into (5) and rewriting,

ASE_m = (1/(X_2 - X_1)) Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} Pr_m{r_m | s_m} ∫_{X_1 + s_m(X_2-X_1)/2^l}^{X_1 + (s_m+1)(X_2-X_1)/2^l} | x_m - X_1 - (r_m + 1/2)(X_2 - X_1)/2^l | dx_m.

However,

∫_{X_1 + s_m(X_2-X_1)/2^l}^{X_1 + (s_m+1)(X_2-X_1)/2^l} | x_m - X_1 - (r_m + 1/2)(X_2 - X_1)/2^l | dx_m = ((X_2 - X_1)^2/2^{2l}) [ | r_m - s_m | + (1/4) δ_{r_m s_m} ]

where

δ_{r_m s_m} = 1 for r_m = s_m
            = 0 for r_m ≠ s_m.

Thus,

ASE_m = ((X_2 - X_1)/2^{2l}) Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} | r_m - s_m | Pr_m{r_m | s_m} + ((X_2 - X_1)/(4·2^{2l})) Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} Pr_m{r_m | s_m} δ_{r_m s_m}.

The average numerical error for the mth sample was defined in equation (6) as

ANE_m = (1/2^l) Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} | r_m - s_m | Pr_m{r_m | s_m}.

In addition, it can be shown that for a channel in which the errors are independent of the symbols actually transmitted,

Σ_{r_m=0}^{2^l-1} Σ_{s_m=0}^{2^l-1} Pr_m{r_m | s_m} δ_{r_m s_m} = 2^l Pr_m{0 | 0}.

Therefore,

ASE_m = ((X_2 - X_1)/2^l) (ANE_m + (1/4) Pr_m{0 | 0}).

APPENDIX C 

Theorem 2 of Reference 2

A significant-bit code is a code in which the (k - k_0) most significant positions are protected by what is referred to as a base code and the remaining k_0 positions are transmitted unprotected. For the base code when used alone, let Pr_B{0 | 0} denote the probability that the output of the decoder is the zero message when the input to the encoder is the zero message. Also, let ANE_B denote the average numerical error of the base code. The average numerical error for the significant-bit code is given by Theorem 2 of Ref. 2:

Theorem 2: Let the base code be defined as above. For a binary symmetric channel with independent errors and when all messages are equally likely to be transmitted,

ANE_sb = Pr_B{0 | 0} p Σ_{i=1}^{k_0} 2^{i-1} q^{k_0-i} + 2^{k_0} ANE_B.

Uncoded transmission is the special case where k_0 = k. Thus, the average numerical error for uncoded transmission can be obtained by letting ANE_B = 0 and Pr_B{0 | 0} = 1 when k_0 = k.

APPENDIX D

Proof that the Average System Error for Uncoded Transmission Increases Monotonically with p

In Section V, equation (9) gives the average system error for uncoded transmission as

ASE_UC = (1/2^l) ( p Σ_{i=1}^{l} 2^{i-1} q^{l-i} + q^l/4 ).

After differentiating with respect to q and grouping terms,

dASE_UC/dq = (1/2^l) [ -(l - l/4) q^{l-1} - Σ_{i=1}^{l-1} (l - i)(2^i - 2^{i-1}) q^{l-i-1} ].

For 1/2 < q < 1,

dASE_UC/dq < 0.

Thus, ASE_UC decreases monotonically as q goes from 1/2 to 1 or, alternatively, ASE_UC increases monotonically as p runs from 0 to 1/2.



APPENDIX E 

Parity Check Matrices for Codes Considered in Section VI 

ASE_(3,1): A (3, 1) perfect single error-correcting code to protect the most significant position.

ξ = 7 (a = 1, l = 5):

H = [ 1 0 0 0 0   1 0
      1 0 0 0 0   0 1 ]

ξ = 15 (a = 1, l = 13):

H = [ 1 0 0 0 0 0 0 0 0 0 0 0 0   1 0
      1 0 0 0 0 0 0 0 0 0 0 0 0   0 1 ]

ASE_(3,1),(3,1): Independent (3, 1) perfect single error-correcting codes to protect the two most significant positions.

ξ = 7 (a = 1, l = 3):

H = [ 1 0 0   1 0 0 0
      1 0 0   0 1 0 0
      0 1 0   0 0 1 0
      0 1 0   0 0 0 1 ]

ξ = 15 (a = 1, l = 11):

H = [ 1 0 0 0 0 0 0 0 0 0 0   1 0 0 0
      1 0 0 0 0 0 0 0 0 0 0   0 1 0 0
      0 1 0 0 0 0 0 0 0 0 0   0 0 1 0
      0 1 0 0 0 0 0 0 0 0 0   0 0 0 1 ]

ASE_(7,4): A (7, 4) perfect single error-correcting code to protect the four most significant positions.

ξ = 7 (a = 1, l = 4):

H = [ 1 1 1 0   1 0 0
      1 1 0 1   0 1 0
      1 0 1 1   0 0 1 ]

ξ = 15 (a = 1, l = 12):

H = [ 1 1 1 0 0 0 0 0 0 0 0 0   1 0 0
      1 1 0 1 0 0 0 0 0 0 0 0   0 1 0
      1 0 1 1 0 0 0 0 0 0 0 0   0 0 1 ]

ASE_(15,11): A (15, 11) perfect single error-correcting code.

ξ = 15 (a = 1, l = 11):

H = [ 1 1 1 1 1 1 1 0 0 0 0   1 0 0 0
      1 1 1 1 0 0 0 1 1 1 0   0 1 0 0
      1 1 0 0 1 1 0 1 1 0 1   0 0 1 0
      1 0 1 0 1 0 1 1 0 1 1   0 0 0 1 ]

ASE'_(15,11): A (15, 11) perfect single error-correcting code in a significant-bit packed code.

ξ = 7 (a = 11, l = 7, ρ = 0.36, R = 0.950): H has 4 rows and 81 columns. Each sample contributes seven columns; the column in its most significant position is one of the eleven information columns of the (15, 11) matrix above (a distinct column for each sample), the columns in its six less significant positions are all zero, and the last four columns form I_4.

ξ = 15 (a = 11, l = 15, ρ = 0.36, R = 0.976): H has 4 rows and 169 columns and is constructed in the same manner, with fourteen all-zero columns per sample.

ASE'_(31,26): A (31, 26) perfect single error-correcting code in a significant-bit packed code.

ξ = 7 (a = 26, l = 7, ρ = 0.19, R = 0.973): H has 5 rows and 187 columns. The information columns of the (31, 26) code are the 26 binary 5-tuples of weight two or more; the column in the most significant position of each sample is one of these (a distinct column for each sample), the columns in the six less significant positions of each sample are all zero, and the last five columns form I_5.

ξ = 15 (a = 26, l = 15, ρ = 0.19, R = 0.987): H has 5 rows and 395 columns and is constructed in the same manner, with fourteen all-zero columns per sample.

ASE''_(31,26): A (31, 26) perfect single error-correcting code in a significant-bit packed code.

ξ = 7 (a = 13, l = 7, ρ = 0.38, R = 0.948): H has 5 rows and 96 columns. The columns in the two most significant positions of each sample are information columns of the (31, 26) code (two distinct columns for each sample), the columns in the five less significant positions of each sample are all zero, and the last five columns form I_5.

ξ = 15 (a = 13, l = 15, ρ = 0.38, R = 0.975): H has 5 rows and 200 columns and is constructed in the same manner, with thirteen all-zero columns per sample.

REFERENCES

1. Masnick, B., and Wolf, J. K., "On Linear Unequal Error Protection Codes," IEEE Trans. Inform. Theory, IT-13, No. 4 (October 1967), pp. 600-607.
2. Buchner, M. M., Jr., "Coding for Numerical Data Transmission," B.S.T.J., 46, No. 5 (May-June 1967), pp. 1025-1041.
3. Peterson, W. W., Error Correcting Codes, New York: M.I.T. Press and John Wiley and Sons, 1961.